717 research outputs found

    How Chinese enterprises evaluate the investment value of seaports along the “One Belt One Road”

    Get PDF

    t-Deletion-s-Insertion-Burst Correcting Codes

    Full text link
    Motivated by applications in DNA-based storage and communication systems, we study deletion and insertion errors that occur simultaneously in a burst. In particular, we study a type of error named $t$-deletion-$s$-insertion-burst ($(t,s)$-burst for short), which is a generalization of the $(2,1)$-burst error proposed by Schoeny et al. Such an error deletes $t$ consecutive symbols and inserts an arbitrary sequence of length $s$ at the same coordinate. We provide a sphere-packing upper bound on the size of binary codes that can correct a $(t,s)$-burst error, showing that the redundancy of such codes is at least $\log n + t - 1$. For $t \geq 2s$, an explicit construction of binary $(t,s)$-burst correcting codes with redundancy $\log n + (t-s-1)\log\log n + O(1)$ is given. In particular, we construct a binary $(3,1)$-burst correcting code with redundancy at most $\log n + 9$, which is optimal up to a constant.
    Comment: Part of this work (the $(t,1)$-burst model) was presented at ISIT 2022. This full version has been submitted to IEEE-IT in August 202
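    To make the error model concrete, here is a minimal Python sketch (our own illustration, not the paper's construction) that applies a $(t,s)$-burst to a binary word; the function name and the random choices are ours.

```python
import random

def apply_ts_burst(word, t, s, pos=None, rng=random):
    """Apply a (t, s)-burst: delete t consecutive symbols starting at
    `pos`, then insert an arbitrary binary sequence of length s at the
    same coordinate."""
    if pos is None:
        pos = rng.randrange(len(word) - t + 1)
    inserted = [rng.randrange(2) for _ in range(s)]
    return word[:pos] + inserted + word[pos + t:]

# A (3, 1)-burst maps a length-n word to a length-(n - 2) word, so a
# correcting code must cope with a net length change of t - s.
x = [1, 0, 1, 1, 0, 0, 1, 0]
y = apply_ts_burst(x, t=3, s=1, pos=2)
print(len(x), len(y))  # 8 6
```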

    Facial Expression Recognition using Vanilla ViT backbones with MAE Pretraining

    Full text link
    Humans convey emotions, voluntarily or involuntarily, through facial expressions. Automatically recognizing basic expressions (such as happiness, sadness, and neutral) from a facial image, i.e., facial expression recognition (FER), is extremely challenging and attracts much research interest. Large-scale datasets and powerful inference models have been proposed to address the problem. Though considerable progress has been made, most state-of-the-art methods employing convolutional neural networks (CNNs) or elaborately modified Vision Transformers (ViTs) depend heavily on upstream supervised pretraining. Transformers are displacing CNNs in more and more computer vision tasks, but they usually need much more training data, since they use fewer inductive biases than CNNs. To explore whether a vanilla ViT without extra training samples from upstream tasks can achieve competitive accuracy, we use a plain ViT with MAE pretraining to perform the FER task. Specifically, we first pretrain the original ViT as a Masked Autoencoder (MAE) on a large facial expression dataset without expression labels. Then, we fine-tune the ViT on popular facial expression datasets with expression labels. The presented method is quite competitive, with 90.22% on RAF-DB and 61.73% on AffectNet, and can serve as a simple yet strong ViT-based baseline for FER studies.
    Comment: 3 pages
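    As a rough illustration of the two-stage pipeline, the following PyTorch/timm sketch covers the fine-tuning stage only. It assumes a hypothetical checkpoint file `mae_encoder.pth` holding MAE-pretrained encoder weights keyed compatibly with timm's vanilla ViT; it is not the authors' released code.

```python
import timm
import torch
import torch.nn as nn

# Stage 2: fine-tune a plain (vanilla) ViT on a labeled FER dataset.
# Assumption: 7 basic-expression classes, as in RAF-DB.
model = timm.create_model("vit_base_patch16_224", num_classes=7)

# Assumption: "mae_encoder.pth" is a hypothetical checkpoint with
# MAE-pretrained encoder weights matching timm's ViT key names;
# strict=False skips the MAE decoder and the fresh classifier head.
state = torch.load("mae_encoder.pth", map_location="cpu")
model.load_state_dict(state, strict=False)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.05)

def finetune_step(images, labels):
    """One supervised fine-tuning step on a batch of face crops."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```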

    Continuous chromatographic processes with a small number of columns: Comparison of simulated moving bed with Varicol, PowerFeed, and ModiCon

    Get PDF
    The Simulated Moving Bed (SMB) process and its recent extensions, called Varicol, PowerFeed, and ModiCon, are studied in the case where a small number of columns is used, i.e., three to five. A multiobjective optimization approach, using genetic algorithms and a detailed model of the multicolumn chromatographic process, is applied to optimize each process separately and allow comparison of the different operating modes. The non-standard SMB processes achieve better performance than SMB, owing to the additional degrees of freedom in the operating conditions, namely asynchronous column switching for Varicol, and varying flow rates and feed concentration during the switching interval for PowerFeed and ModiCon, respectively. We also consider the possibility of combining two non-standard operating modes in a new hybrid process and evaluate its achievable performance. Finally, a critical assessment of the results obtained and of the potential for practical implementation of the different techniques is reported.
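    To make the multiobjective setup concrete, below is a toy Python sketch of one Pareto-screening step over candidate operating conditions. The `evaluate` function is a hypothetical stand-in for the detailed chromatographic model and its objectives; the paper's actual genetic algorithm and process model are far more elaborate.

```python
import numpy as np

rng = np.random.default_rng(0)

def evaluate(params):
    """Hypothetical stand-in for the detailed multicolumn model: maps
    operating conditions (e.g., section flow rates, switch time) to two
    objectives to be maximized (say, purity and productivity)."""
    purity = -np.sum((params - 0.6) ** 2)        # toy objective 1
    productivity = -np.sum((params - 0.4) ** 2)  # toy objective 2
    return np.array([purity, productivity])

def dominates(a, b):
    """Pareto dominance for maximization: a is at least as good in every
    objective and strictly better in at least one."""
    return np.all(a >= b) and np.any(a > b)

# One generation of Pareto screening: keep the non-dominated operating
# points; a genetic algorithm would then recombine and mutate these.
pop = rng.uniform(0.0, 1.0, size=(40, 4))        # 4 operating variables
objs = np.array([evaluate(p) for p in pop])
front = [i for i in range(len(pop))
         if not any(dominates(objs[j], objs[i]) for j in range(len(pop)))]
print(f"{len(front)} non-dominated operating points out of {len(pop)}")
```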

    Principal Component Analysis of Galaxy Clustering in Hyperspace of Galaxy Properties

    Full text link
    Ongoing and upcoming galaxy surveys are providing precision measurements of galaxy clustering. However, a major obstacle in its cosmological application is the stochasticity in the galaxy bias. We explore whether principal component analysis (PCA) of the galaxy correlation matrix in the hyperspace of galaxy properties (e.g., magnitude and color) can reveal further information to mitigate this issue. Based on the hydrodynamic simulation TNG300-1, we analyze the cross power spectrum matrix of galaxies in the magnitude and color space of multiple photometric bands. (1) We find that the first principal component $E_i^{(1)}$ is an excellent proxy of the galaxy deterministic bias $b_D$, in that $E_i^{(1)}=\sqrt{P_{mm}/\lambda^{(1)}}\,b_{D,i}$. Here $i$ denotes the $i$-th galaxy sub-sample, $\lambda^{(1)}$ is the largest eigenvalue, and $P_{mm}$ is the matter power spectrum. We verify that this relation holds for all the galaxy samples investigated, down to $k\sim 2h/\mathrm{Mpc}$. Since $E_i^{(1)}$ is a direct observable, we can utilize it to design a linear weighting scheme to suppress the stochasticity in the galaxy-matter relation. For an LSST-like magnitude-limited galaxy sample, the stochasticity $\mathcal{S}\equiv 1-r^2$ can be suppressed by a factor of $\gtrsim 2$ at $k=1h/\mathrm{Mpc}$. This reduces the stochasticity-induced systematic error in the matter power spectrum reconstruction combining galaxy clustering and galaxy-galaxy lensing from $\sim 12\%$ to $\sim 5\%$ at $k=1h/\mathrm{Mpc}$. (2) We also find that $\mathcal{S}$ increases monotonically with $f_\lambda$ and $f_{\lambda^2}$. $f_{\lambda,\lambda^2}$ quantify the fractional contribution of other eigenmodes to the galaxy clustering and are direct observables. Therefore the two provide extra information on mitigating galaxy stochasticity.
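    The key relation can be checked numerically. The numpy sketch below (toy numbers, our own construction) builds a cross power spectrum matrix $P_{ij}=b_ib_jP_{mm}$ plus an isotropic stochastic term, and recovers the deterministic biases from the leading eigenvector via $b_{D,i}=E_i^{(1)}\sqrt{\lambda^{(1)}/P_{mm}}$.

```python
import numpy as np

# Toy cross power spectrum matrix P_ij at a single k, for N galaxy
# sub-samples: P_ij = b_i b_j P_mm plus an isotropic stochastic term.
# All numbers here are illustrative, not the paper's measurements.
N, P_mm = 5, 1.0e4
b_true = np.linspace(0.8, 1.6, N)          # deterministic biases b_{D,i}
P = P_mm * np.outer(b_true, b_true) + 50.0 * np.eye(N)

eigval, eigvec = np.linalg.eigh(P)         # eigenvalues in ascending order
lam1, E1 = eigval[-1], eigvec[:, -1]
E1 *= np.sign(E1.sum())                    # fix the arbitrary sign

# Invert the relation E_i^(1) = sqrt(P_mm / lambda^(1)) * b_{D,i}:
b_recovered = E1 * np.sqrt(lam1 / P_mm)
print(np.allclose(b_recovered, b_true, rtol=0.01))  # True
```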

    RAR-U-Net: a Residual Encoder to Attention Decoder by Residual Connections Framework for Spine Segmentation under Noisy Labels

    Full text link
    Segmentation algorithms for medical image volumes are widely studied for many clinical and research purposes. We propose a novel and efficient framework for medical image segmentation. The framework operates under a deep learning paradigm and incorporates four novel contributions. Firstly, a residual interconnection is explored in encoders at different scales. Secondly, the four copy-and-crop connections are replaced by residual-block-based concatenations to alleviate the disparity between encoders and decoders. Thirdly, convolutional attention modules for feature refinement are studied on decoders at all scales. Finally, an adaptive denoising learning strategy (ADL), based on the training process from underfitting to overfitting, is studied. Experimental results are reported on a publicly available benchmark database of spine CTs. Our segmentation framework achieves performance competitive with other state-of-the-art methods across a variety of evaluation measures.
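    A hypothetical PyTorch sketch of two of the contributions, a residual block replacing the copy-and-crop skip and an attention module refining decoder features, is given below; the module designs are our assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ResidualSkip(nn.Module):
    """Residual block standing in for U-Net's copy-and-crop skip, so the
    encoder features are adapted before concatenation with the decoder."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(x + self.body(x))

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style attention as a stand-in for the
    paper's convolutional attention modules on the decoder side."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.gate(x)

enc = torch.randn(1, 64, 48, 48)   # encoder feature map at one scale
dec = torch.randn(1, 64, 48, 48)   # upsampled decoder feature map
fused = torch.cat([ResidualSkip(64)(enc), dec], dim=1)
print(ChannelAttention(128)(fused).shape)  # torch.Size([1, 128, 48, 48])
```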